Sparse Regression Codes for Multi-terminal Source and Channel Coding
We study a new class of codes for Gaussian multi-terminal source and channel
coding. These codes are designed using the statistical framework of
high-dimensional linear regression and are called Sparse Superposition or
Sparse Regression codes. Codewords are linear combinations of subsets of
columns of a design matrix. These codes were recently introduced by Barron and
Joseph and shown to achieve the channel capacity of AWGN channels with
computationally feasible decoding. They have also recently been shown to
achieve the optimal rate-distortion function for Gaussian sources. In this
paper, we demonstrate how to implement random binning and superposition coding
using sparse regression codes. In particular, with minimum-distance
encoding/decoding it is shown that sparse regression codes attain the optimal
information-theoretic limits for a variety of multi-terminal source and channel
coding problems.
Comment: 9 pages, appeared in the Proceedings of the 50th Annual Allerton
Conference on Communication, Control, and Computing, 2012
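The codeword structure described above (linear combinations of one column per section of a design matrix) can be made concrete with a small numerical sketch. The dimensions, the unit coefficients, and the function name below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

# Toy parameters: L sections of M columns each; the codebook has M^L codewords.
rng = np.random.default_rng(0)
n, M, L = 32, 4, 8                      # block length, columns per section, sections

# Design matrix with i.i.d. N(0, 1/n) entries.
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, M * L))

def sparc_codeword(A, indices, M):
    """Codeword = sum of one chosen column per section (unit coefficients here)."""
    beta = np.zeros(A.shape[1])
    for sec, j in enumerate(indices):
        beta[sec * M + j] = 1.0         # one non-zero entry per section
    return A @ beta

c = sparc_codeword(A, [0] * L, M)       # pick the first column of every section
```

The point of the structure is that the codebook is never stored explicitly: the design matrix, of size polynomial in the block length, determines all M^L codewords.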
An Achievable Rate Region for the Broadcast Channel with Feedback
A single-letter achievable rate region is proposed for the two-receiver
discrete memoryless broadcast channel with generalized feedback. The coding
strategy involves block-Markov superposition coding, using Marton's coding
scheme for the broadcast channel without feedback as the starting point. If the
message rates in the Marton scheme are too high to be decoded at the end of a
block, each receiver is left with a list of messages compatible with its
output. Resolution information is sent in the following block to enable each
receiver to resolve its list. The key observation is that the resolution
information of the first receiver is correlated with that of the second. This
correlated information is efficiently transmitted via joint source-channel
coding, using ideas similar to the Han-Costa coding scheme. Using the result,
we obtain an achievable rate region for the stochastically degraded AWGN
broadcast channel with noisy feedback from only one receiver. It is shown that
this region is strictly larger than the no-feedback capacity region.
Comment: To appear in IEEE Transactions on Information Theory. Contains an
example of the AWGN broadcast channel with noisy feedback
Lossy Compression via Sparse Linear Regression: Performance under Minimum-distance Encoding
We study a new class of codes for lossy compression with the squared-error
distortion criterion, designed using the statistical framework of
high-dimensional linear regression. Codewords are linear combinations of
subsets of columns of a design matrix. Called a Sparse Superposition or Sparse
Regression codebook, this structure is motivated by an analogous construction
proposed recently by Barron and Joseph for communication over an AWGN channel.
For i.i.d. Gaussian sources and minimum-distance encoding, we show that such a
code can attain the Shannon rate-distortion function with the optimal error
exponent, for all distortions below a specified value. It is also shown that
sparse regression codes are robust in the following sense: a codebook designed
to compress an i.i.d. Gaussian source of variance σ² to within (squared-error)
distortion D can compress any ergodic source of variance less than σ² to
within distortion D. Thus the sparse regression ensemble retains many of the
good covering properties of the i.i.d. random Gaussian ensemble, while having
a compact representation in terms of a matrix whose size is a low-order
polynomial in the block-length.
Comment: This version corrects a typo in the statement of Theorem 2 of the
published paper
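Minimum-distance encoding as analyzed above can be sketched by exhaustive search over a toy codebook. All sizes here are illustrative (the analysis in the paper concerns large block lengths, where exhaustive search is infeasible), and the unit coefficients are a simplifying assumption:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, M, L = 16, 3, 3                      # tiny toy sizes so brute force is feasible
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, M * L))
x = rng.normal(size=n)                  # source sequence to compress

def codeword(indices):
    """One column per section, unit coefficients."""
    return sum(A[:, sec * M + j] for sec, j in enumerate(indices))

# Minimum-distance encoding: search all M^L index choices for the codeword
# closest to x in squared error.
best = min(product(range(M), repeat=L),
           key=lambda idx: np.sum((x - codeword(idx)) ** 2))
dist = np.sum((x - codeword(best)) ** 2) / n
```

The encoder's output is just the L chosen indices, i.e. L·log₂(M) bits per block, which fixes the rate of the code.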
Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding
We propose computationally efficient encoders and decoders for lossy
compression using a Sparse Regression Code. The codebook is defined by a design
matrix and codewords are structured linear combinations of columns of this
matrix. The proposed encoding algorithm sequentially chooses columns of the
design matrix to successively approximate the source sequence. It is shown to
achieve the optimal distortion-rate function for i.i.d. Gaussian sources under
the squared-error distortion criterion. For a given rate, the parameters of the
design matrix can be varied to trade off distortion performance with encoding
complexity. An example of such a trade-off as a function of the block length n
is the following. With computational resource (space or time) per source sample
of O((n/\log n)^2), for a fixed distortion-level above the Gaussian
distortion-rate function, the probability of excess distortion decays
exponentially in n. The Sparse Regression Code is robust in the following
sense: for any ergodic source, the proposed encoder achieves the optimal
distortion-rate function of an i.i.d. Gaussian source with the same variance.
Simulations show that the encoder has good empirical performance, especially at
low and moderate rates.
Comment: 14 pages, to appear in IEEE Transactions on Information Theory
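The sequential column-selection idea described above can be illustrated with a simplified greedy encoder. The projection-based coefficient update below is an illustrative variant (the paper's algorithm uses pre-set coefficients), and all dimensions are toy values:

```python
import numpy as np

rng = np.random.default_rng(2)
n, M, L = 64, 16, 8                     # toy block length, section size, sections
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, M * L))
x = rng.normal(size=n)                  # source sequence to compress

# Greedy successive encoding (simplified): go section by section, pick the
# column most correlated with the current residual, subtract its projection.
residual = x.copy()
chosen = []
for sec in range(L):
    cols = A[:, sec * M:(sec + 1) * M]
    j = int(np.argmax(cols.T @ residual))
    col = cols[:, j]
    coef = (col @ residual) / (col @ col)   # least-squares projection coefficient
    residual = residual - coef * col
    chosen.append(j)

final_mse = np.mean(residual ** 2)
initial_mse = np.mean(x ** 2)
```

Each section contributes one index, so encoding costs L searches over M columns rather than a search over all M^L codewords; enlarging M at fixed rate trades complexity against distortion, as in the trade-off quoted above.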
Finite-sample analysis of Approximate Message Passing.
Approximate message passing (AMP) refers to a class of efficient algorithms
for statistical estimation in high-dimensional problems such as compressed
sensing and low-rank matrix estimation. This paper analyzes the performance of
AMP in the regime where the problem dimension is large but finite. For
concreteness, we consider the setting of high-dimensional regression, where
the goal is to estimate a high-dimensional vector β from a noisy measurement
y = Aβ + w. AMP is a low-complexity, scalable algorithm for this problem.
Under suitable assumptions on the measurement matrix A, AMP has the attractive
feature that its performance can be accurately characterized in the large
system limit by a simple scalar iteration called state evolution. Previous
proofs of the validity of state evolution have all been asymptotic convergence
results. In this paper, we derive a concentration inequality for AMP with
i.i.d. Gaussian measurement matrices of finite size. The result shows that the
probability of deviation from the state evolution prediction falls
exponentially in n. This provides theoretical support for empirical findings
that have demonstrated excellent agreement of AMP performance with state
evolution predictions for moderately large dimensions. The concentration
inequality also indicates that the number of AMP iterations can grow no faster
than order log n / log log n for the performance to be close to the state
evolution predictions with high probability. The analysis can be extended to
obtain similar non-asymptotic results for AMP in other settings such as
low-rank matrix estimation.
Marie Curie Career Integration Grant under Grant Agreement Number 631489
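A minimal AMP recursion of the kind analyzed above, for sparse linear regression with a soft-threshold denoiser. The threshold rule, the signal prior, and all dimensions are illustrative choices, not those of the paper:

```python
import numpy as np

# Toy instance of y = A @ x0 + w with a sparse Bernoulli-Gaussian signal.
rng = np.random.default_rng(3)
N, n = 500, 250                          # signal dimension, measurements
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
x0 = rng.normal(size=N) * (rng.random(N) < 0.1)   # ~10% non-zero entries
sigma = 0.01
y = A @ x0 + sigma * rng.normal(size=n)

def soft(u, t):
    """Soft-threshold denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x = np.zeros(N)
z = y.copy()
for _ in range(20):
    tau = np.sqrt(np.mean(z ** 2))       # empirical estimate of effective noise
    x = soft(x + A.T @ z, 1.4 * tau)     # denoise the effective observation
    onsager = np.count_nonzero(x) / n    # Onsager correction for soft threshold
    z = y - A @ x + onsager * z          # residual with memory term

mse = np.mean((x - x0) ** 2)
```

The Onsager term in the residual update is what makes the effective observation x + Aᵀz behave like the signal plus Gaussian noise of variance τ², which is exactly the quantity that state evolution tracks and that the finite-sample analysis shows concentrates.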
PCA Initialization for Approximate Message Passing in Rotationally Invariant Models
We study the problem of estimating a rank-one signal in the presence of
rotationally invariant noise, a class of perturbations more general than
Gaussian noise. Principal Component Analysis (PCA) provides a natural
estimator, and sharp results on its performance have been obtained in the
high-dimensional regime. Recently, an Approximate Message Passing (AMP)
algorithm has been proposed as an alternative estimator with the potential to
improve the accuracy of PCA. However, the existing analysis of AMP requires an
initialization that is both correlated with the signal and independent of the
noise, which is often unrealistic in practice. In this work, we combine the two
methods, and propose to initialize AMP with PCA. Our main result is a rigorous
asymptotic characterization of the performance of this estimator. Both the AMP
algorithm and its analysis differ from those previously derived in the Gaussian
setting: at every iteration, our AMP algorithm requires a specific term to
account for PCA initialization, while in the Gaussian case, PCA initialization
affects only the first iteration of AMP. The proof is based on a two-phase
artificial AMP that first approximates the PCA estimator and then mimics the
true AMP. Our numerical simulations show an excellent agreement between AMP
results and theoretical predictions, and suggest an interesting open direction
on achieving Bayes-optimal performance.
Comment: 72 pages, 2 figures, appeared in Neural Information Processing
Systems (NeurIPS), 2021
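The PCA estimator whose role as an initializer is discussed above can be demonstrated on a toy symmetric spiked model. The SNR, dimension, and Gaussian (rather than general rotationally invariant) noise below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, snr = 400, 2.0                        # dimension and signal-to-noise ratio

# Unit-norm rank-one signal.
x = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)

# Symmetric Gaussian noise matrix, normalized so its spectrum stays bounded.
G = rng.normal(size=(n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2)

# Spiked observation: rank-one signal buried in noise.
Y = snr * np.outer(x, x) + W

# PCA estimate = top eigenvector of Y.
eigvals, eigvecs = np.linalg.eigh(Y)     # eigenvalues in ascending order
v_pca = eigvecs[:, -1]
overlap = abs(v_pca @ x)                 # |correlation| with the true signal
```

Above the spectral threshold, the overlap is bounded away from zero but also away from one, which is why feeding this estimate into AMP as an initialization can improve the accuracy further.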
Approximate Message Passing with Spectral Initialization for Generalized Linear Models.
We consider the problem of estimating a signal from measurements obtained via
a generalized linear model. We focus on estimators based on approximate
message passing (AMP), a family of iterative algorithms with many appealing
features: the performance of AMP in the high-dimensional limit can be
succinctly characterized under suitable model assumptions; AMP can also be
tailored to the empirical distribution of the signal entries; and for a wide
class of estimation problems, AMP is conjectured to be optimal among all
polynomial-time algorithms. However, a major issue of AMP is that in many
models (such as phase retrieval), it requires an initialization correlated
with the ground-truth signal and independent from the measurement matrix.
Assuming that such an initialization is available is typically not realistic.
In this paper, we solve this problem by proposing an AMP algorithm initialized
with a spectral estimator. With such an initialization, the standard AMP
analysis fails since the spectral estimator depends in a complicated way on
the design matrix. Our main contribution is a rigorous characterization of the
performance of AMP with spectral initialization in the high-dimensional limit.
The key technical idea is to define and analyze a two-phase artificial AMP
algorithm that first produces the spectral estimator, and then closely
approximates the iterates of the true AMP. We also provide numerical results
that demonstrate the validity of the proposed approach.
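The spectral estimator used as an initializer above can be sketched for the phase-retrieval special case of a generalized linear model. The trimming weight T(y) = min(y, 5) and the sample sizes are illustrative choices, not necessarily those analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 100, 2000                         # signal dimension, number of measurements
x = rng.normal(size=d)
x /= np.linalg.norm(x)                   # unit-norm ground-truth signal
A = rng.normal(size=(n, d))              # i.i.d. Gaussian sensing vectors
y = (A @ x) ** 2                         # noiseless phase-retrieval measurements

# Spectral estimator: top eigenvector of a weighted covariance of the sensing
# vectors, D = (1/n) * sum_i T(y_i) * a_i a_i^T, with a trimming weight to
# tame heavy-tailed measurements.
T = np.minimum(y, 5.0)
D = (A * T[:, None]).T @ A / n
w, V = np.linalg.eigh(D)
v_spec = V[:, -1]                        # spectral estimate of x (up to sign)
overlap = abs(v_spec @ x)
```

The dependence of v_spec on every row of A is exactly what breaks the standard AMP analysis, motivating the two-phase artificial AMP construction described above.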